Pull request overview
Updates the agent’s provider-specific default model IDs used when no COPILOT_DEFAULT_MODEL override is set, aligning defaults with newer GPT model naming across supported endpoints.
Changes:
- Switch default model for `api.githubcopilot.com` from `gpt-4o` to `gpt-4.1`.
- Switch default model for `models.github.ai` from `openai/gpt-4o` to `openai/gpt-4.1`.
- Switch default model for `api.openai.com` from `gpt-4o` to `gpt-4.1`.
Pull request overview
Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.
```diff
 api_endpoint = get_AI_endpoint()
 match urlparse(api_endpoint).netloc:
     case AI_API_ENDPOINT_ENUM.AI_API_GITHUBCOPILOT:
-        default_model = "gpt-4o"
+        default_model = "gpt-4.1"
     case AI_API_ENDPOINT_ENUM.AI_API_MODELS_GITHUB:
-        default_model = "openai/gpt-4o"
+        default_model = "openai/gpt-4.1"
     case AI_API_ENDPOINT_ENUM.AI_API_OPENAI:
-        default_model = "gpt-4o"
+        default_model = "gpt-4.1"
```
Changing the runtime default model from gpt-4o to gpt-4.1 makes existing documentation inaccurate (e.g., doc/GRAMMAR.md explicitly states the default is gpt-4o). Please update the docs (and any other user-facing references) to reflect the new default so users aren’t misled when they omit model: in task definitions.
Pull request overview
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
```diff
 sonnet_latest: claude-sonnet-4.5
 gpt_default: gpt-4.1
-gpt_latest: gpt-5
```
gpt_latest (and the previously defined Claude aliases) are removed from the example model_config, but other repository examples and docs still reference gpt_latest (e.g., README “Model configs” section, doc/GRAMMAR.md model_config example, and examples/taskflows/CVE-2023-2283.yaml uses model: gpt_latest with this config). With this change, those taskflows will pass gpt_latest through as the provider model ID and likely fail at runtime. Consider either reintroducing gpt_latest (mapping to the desired provider model) or updating the referenced docs/taskflows to use gpt_default (or another existing key).
Suggested change:

```yaml
gpt_latest: gpt-4.1
```
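The failure mode described in the comment above can be illustrated with a minimal alias-resolution sketch. The `resolve_model` helper and its pass-through behavior are assumptions about how `model_config` aliases are handled, based on the comment's description that unknown aliases are "passed through as the provider model ID"; they are not the agent's actual code.

```python
def resolve_model(model: str, model_config: dict[str, str]) -> str:
    """Resolve a model alias via model_config; unknown names pass through unchanged."""
    return model_config.get(model, model)


# With gpt_latest removed from the example config, a taskflow that still
# declares `model: gpt_latest` sends the literal alias string to the provider:
config = {"sonnet_latest": "claude-sonnet-4.5", "gpt_default": "gpt-4.1"}
print(resolve_model("gpt_default", config))  # resolves to "gpt-4.1"
print(resolve_model("gpt_latest", config))   # passes through; likely a provider error
```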
```diff
-Note that model identifiers may differ between OpenAI compatible endpoint providers, make sure you change your model identifier accordingly when switching providers. If not specified, a default LLM model (`gpt-4o`) is used.
+Note that model identifiers may differ between OpenAI compatible endpoint providers, make sure you change your model identifier accordingly when switching providers. If not specified, a default LLM model (such as `gpt-4.1`) is used.
```
This note now suggests a default model like gpt-4.1, but the actual default model identifier is endpoint-specific (e.g., models.github.ai defaults to a namespaced ID like openai/gpt-4.1 in src/seclab_taskflow_agent/agent.py). To avoid confusion for users switching endpoints, consider explicitly stating that the default model string depends on AI_API_ENDPOINT and may be namespaced, rather than implying a single identifier.
Suggested change:

```diff
-Note that model identifiers may differ between OpenAI compatible endpoint providers, make sure you change your model identifier accordingly when switching providers. If not specified, a default LLM model (such as `gpt-4.1`) is used.
+Note that model identifiers, including the default model string, depend on the configured `AI_API_ENDPOINT` and may be namespaced (for example, `openai/gpt-4.1` on some endpoints). When switching providers, make sure you update your model identifier accordingly. If `model` is not specified in a task, the endpoint's own default LLM model will be used (which may be `gpt-4.1` or another provider-specific default).
```
update + tested default models